AMD Ryzen 9 7950X vs Intel Core i9-13900K

Hi,

I am looking at building a new rig for work in the “home office” plus some gaming, and I would appreciate your feedback on which processor to choose with respect to KeyShot (KS) performance.

I will use CPU rendering 99% of the time, so GPU rendering is not an option. In the office I have a bigger workstation (Threadripper 3970X + RTX 3090), so I won’t need an equal setup at home, but it should still be comfortably fast to work on.

What CPU would be the best choice here?

I’m not sure how it does against an i9-13900K, but the freshly released 7800X3D does pretty well compared to its far more expensive siblings.

But if I look at Processor Benchmarks on the Geekbench Browser, which gives a nice overview, I see the i9-13900K is still king of the road. It also consumes a lot of power and gets pretty hot, though. If you mostly render on GPU and are on a budget, I would put the money into the GPU instead of the CPU. Games don’t use even half of the CPU, and even then you’re lucky.

Thanks for your feedback.
I always had the idea that the more cores a CPU has, the better for KS (looking only at CPU performance)?
The 7800X3D has “only” 8 cores vs the 24 of the i9, so in my mind the i9 or the 16-core Ryzen would be faster? Is “more cores = better” no longer true, or is the new processor tech different by now? Just curious to understand the nuances better with respect to KS performance.

The example of the 7800X3D was mainly because in a lot of gaming benchmarks it beats the way more expensive 7950X and 7950X3D. That’s because games don’t run across many cores/threads; it’s hard to split game workloads over multiple cores. I play Flight Simulator sometimes, which I think is about the heaviest game/sim you can run currently, but it only uses 2 cores / 4 threads, which makes people complain a lot; that’s just how it is. Even with a 4090 you run into the limit of the CPU, since only a quarter of the actual CPU is used.

With renders it’s easy to split a task over multiple cores. You just divide the render into a grid, and every core renders its own square of the grid. So you use 100% of the CPU, and the faster the CPU, the better.
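To make that concrete, here is a minimal sketch in Python of the grid-splitting idea (the image size, tile size, and the trace_pixel stub are made up for illustration; a real renderer does this in native code):

```python
from multiprocessing import Pool

WIDTH, HEIGHT, TILE = 1920, 1080, 128  # hypothetical image and tile size

def tiles(width, height, size):
    """Cut the image into a grid of (x, y, w, h) squares."""
    for y in range(0, height, size):
        for x in range(0, width, size):
            yield (x, y, min(size, width - x), min(size, height - y))

def trace_pixel(px, py):
    """Placeholder: a real renderer traces rays to compute this color."""
    return (0, 0, 0)

def render_tile(tile):
    """Render one square of the grid on whichever core picks it up."""
    x, y, w, h = tile
    return [(x + i, y + j, trace_pixel(x + i, y + j))
            for j in range(h) for i in range(w)]

if __name__ == "__main__":
    with Pool() as pool:  # one worker process per CPU core by default
        for tile_pixels in pool.imap_unordered(render_tile, tiles(WIDTH, HEIGHT, TILE)):
            pass  # here you would write the finished tile into the framebuffer
```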

That would also mean, in theory, that more cores are faster in KeyShot. That’s basically true, but if you look at CPUs you will see that CPUs with a lot of cores have much lower clock speeds. I think even the most expensive Threadripper tops out around 4.5 GHz, which is much lower than some consumer-grade CPUs that can do almost 6 GHz (with a lot of heat and power consumption).

The Threadripper at your office has a nice 32 cores, so that compensates a bit for the lower clock speed, but it doesn’t really justify its price if you look at render speed alone. It can justify its price if you want to use a massive amount of memory or run it with 4 GPUs, since the CPU has more PCIe lanes, and that can be worth it.

Personally I wouldn’t buy a Threadripper like the one at your office, since it costs almost 5x the price of a 13900K. For that money you could put together 4 PCs with a 13900K each and render over the network with 4 × 24 = 96 cores (128 threads, since only the P-cores have hyperthreading). No Threadripper even gets close.

If I look at the benchmark topic I see the Threadripper does quite a lot better than a 13900K, but I’m not really sure why. It has more cache, which can help. In the V-Ray benchmarks the difference is smaller.

In the end it’s a budget and personal-taste thing. For me the cost of a Threadripper would be too high for the advantage, but I also render mostly on GPU since it’s much faster anyway. For gaming an i9-13900K is much faster because of its way higher clock speed and the fact that games don’t use a lot of cores/threads.


Thanks again, great info/feedback.

Exactly, I am looking for a less expensive alternative where I can still work “comfortably” from home.
When I bought the 3970X it was around 2k; by now the prices are insane.

That’s where I saw a score of 5.49 and I thought “not bad for the price.”

I think I will go with the 13900K then. Thanks again!

Just a note: the 13900K/F/KS gets really hot, so I would also combine it with a good cooling solution. It doesn’t really have to be an AIO water-cooling kit, since I think a good air cooler is also possible, but that one will be huge compared to an AIO solution :)

The 13900KS version actually has a turbo clock of 6 GHz, which can be nice in gaming. It’s around 220 euro more, I think, so I’m not sure whether you’d find it worth it. With all cores loaded you won’t get the 6 GHz, but maybe it has some overclocking margin. I’m still on an i9-9900K, so I’m a bit behind in that regard ;)

Yeah, my previous workstations had AIOs and both failed within months of use; since then I only go with air coolers. :)


My Corsair AIO is still running here, but with a good air cooler I don’t think the temperatures would have been any higher. It does look clean in the case, though.

I am curious: is there a particular reason you prefer CPU rendering in KS? I wasn’t aware of any added bonus aside from maybe more consistency in animations?


I will try to explain what is faster and why :)

First of all, the current fastest consumer CPU is the Intel Core i9 13th-gen 13900 family, meaning all the 13900K/KS/KF variants.

The AMD Ryzen 9 7950X is just 7-9% slower.

The main factor when it comes to CPU speed is transistors. You’ve probably heard of them: a couple of billion tiny switches on a small wafer, doing all the calculations. That’s the reason Apple dominates in its own chip segment: they don’t use more cores, but more transistors with a low core count, so all apps can be optimized evenly across all products.

Example: if AMD and Intel each released the same 10 nm CPU at 5 GHz, both with 10 billion transistors, and Intel put them in 2 cores while AMD put them in 20, the speed would be exactly the same. It just means AMD can do more parallel tasks/calculations, but the speed would be exactly the same.

Of course there will always be small differences: different GHz, different nanometer processes, even which part of the wafer the CPU came from. And if you start googling how many transistors a CPU has, everyone seems to have different information :) There are just a lot of different things that play a role, but core count is mostly not one of them.

Hello,

Of course there are differences and trade-offs in both. Generally the CPU creates more accurate images; you can see this, for example, when you check the sample count and noise level. But the GPU is much faster with brute force. Technology-wise, these days the GPU is much closer to the CPU in quality than the CPU is to the GPU in rendering speed.

But I still prefer CPU, I am just old school :)

CPU rendering uses just the CPU.
GPU rendering still needs the CPU to prepare all the data and do some calculations at the beginning when you start rendering on the GPU; that part all happens on the CPU.

The GPU has limited VRAM, which means that if you work at a high resolution, and maybe with a really big scene, your render just crashes because all the VRAM is full. Of course you can use system RAM, but when the GPU renders through RAM it has to access it over the PCIe lanes and the motherboard bus, which is much slower; it’s just slow, and you will notice how slow it is. Because of this VRAM limit the CPU is much more scalable: if you use two GPUs, each with 16 GB of VRAM, you don’t get 32 GB. Nope, you still have 16 GB, and the scene is loaded identically into each 16 GB, because each GPU needs the same data and can’t access it from the other one.

But you can use multiple CPUs that share the same RAM; that is much more scalable when it comes to rendering larger scenes.

But generally I would recommend GPU these days; it’s much, much faster.


That’s not entirely correct. With GPU renders in KeyShot you can’t use your normal RAM; it’s just the VRAM on the GPU. And if you have two GPUs that support NVLink, as far as I know that doubles the usable VRAM in most renderers, including KeyShot. But since NVIDIA decided not to put NVLink on the 4090 anymore, getting a lot of VRAM (48 GB) for a nice price would mean buying two second-hand 3090 (Ti)s.

I would also prefer GPU, and if you manage your textures a bit I think you can render quite a lot with 24 GB. And the render-time difference is so huge I would only render on CPU if I got paid by the hour ;)

And about the speed comparison for CPUs: the core count does matter, of course. It’s the same with the CUDA core count if you render on GPU. If you have 20 cores at 5 GHz versus 2 cores at 5 GHz, the one with 20 cores is 10x as fast, simply because a render gets cut into squares and every core renders its own square of the image. So the one with 2 cores renders 2 squares at a time, while the one with 20 cores renders 20 squares at a time. That’s basically why rendering benefits from a massive number of cores, like the Threadrippers have. After core count, the clock speed plays a major role, and the more cores, the lower the clocks. But while a Threadripper can have 64 cores, an NVIDIA 3090 has over 10,000. Totally different kinds of cores, but it gives an idea of how many instructions a GPU can do in parallel.

Unlike games or general software, with renders you can easily assign one square to one core, while with games it’s way harder to split the work into different jobs for different cores. That’s why a lot of games use just 1 core / 2 threads.

In V-Ray, for example, you can see all the separate squares, and every square gets rendered by one CPU core. Depending on what’s inside a tiny square, one square finishes faster than another, but a core that is finished simply moves on to the next square.
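That “finished core grabs the next square” behavior is just a shared work queue. A rough sketch of the idea in Python, with the bucket count and the render stub invented for illustration:

```python
import multiprocessing as mp

def render(bucket):
    """Placeholder for actually tracing the rays inside one bucket."""
    return f"bucket {bucket} done"

def worker(bucket_queue, done_queue):
    """Each process plays one CPU core: pull a bucket, render it, repeat."""
    while True:
        bucket = bucket_queue.get()
        if bucket is None:        # sentinel: no buckets left for this core
            break
        done_queue.put(render(bucket))

if __name__ == "__main__":
    cores = mp.cpu_count()
    bucket_queue, done_queue = mp.Queue(), mp.Queue()
    for b in range(64):           # 64 buckets waiting to be rendered
        bucket_queue.put(b)
    for _ in range(cores):        # one stop signal per core
        bucket_queue.put(None)
    procs = [mp.Process(target=worker, args=(bucket_queue, done_queue))
             for _ in range(cores)]
    for p in procs:
        p.start()
    for _ in range(64):
        print(done_queue.get())   # buckets arrive in finish order, not grid order
    for p in procs:
        p.join()
```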

1 Like

I think my English is so bad that I probably don’t make things clear sometimes; when I read my own reply I can’t even tell what’s going on :) About the core count: I was explaining why you can’t compare two CPUs with different core counts directly, and why the CPU with fewer cores is sometimes faster. My example was two CPUs with the same number of transistors. Two CPUs with the same transistor count but different core counts will have the same speed; the only difference will be in the preview, where you get more buckets. But whether you have 20 buckets or 2 buckets moving in the viewport doesn’t matter, because both CPUs have the same number of transistors, so they can calculate exactly the same amount of information.

Example:

CPU with 20 cores, 1 billion transistors
CPU with 10 cores, 2 billion transistors

The faster one is the one with 10 cores, since it has double the computing/mathematical power in transistors doing the calculations. Yes, you will have fewer small squares on the screen in the viewport, but that just doesn’t matter. The speed of those small squares is based on how much information they can process.

[image: comparison of two CPUs, one with 16 small cores and one with 4 cores that each have 4x the transistors]
Sorry, I am not good at Photoshop :) But as you can see in the picture, why would you think the CPU with more cores but fewer transistors should beat the one with fewer cores? Yes, it has fewer cores, but they have 4x more computing power. This is what I tried to explain in my first post when I mentioned transistors. The rendering speed of these CPUs is exactly the same.

You can also use this CPU image as an example of bucket size in the viewport, the small squares in V-Ray. Yes, you can have 16 small ones, but if you have only 4 of them, each with 4 times more power, it comes out the same.

In general terms
These days more cores actually does mean more transistors, because companies design it that way. They make much larger CPUs; just look at how big they are. It’s the same nanometer process, but the dies are bigger, so they can fit more cores. You can still find some ultra-expensive Intel Xeons with double or even 4 times more cores than your Intel i9-13900, and the mainstream 13900 still beats those expensive Xeons, even when you put two of them on a motherboard.

I also use V-Ray, but since V-Ray 5 introduced adaptive bucket size I don’t think it makes a difference. If you remember, in older versions you had to set the bucket size to a smaller value so the end of the render was faster, because otherwise some cores finished their job while there was no bucket left for them. I wasn’t really talking about those squares; I was trying to explain why core count doesn’t determine speed in a CPU. The V-Ray bucket layout is a representation of the cores, as you mention.

More cores only give you a faster preview, since each one calculates a smaller area; that was the main topic before V-Ray 5, if I remember right.

I know KeyShot can’t use system RAM when you render on GPU; it was just an example, because some engines can do this, but it’s slow since they have to feed everything over the PCIe lanes and the bus. Nobody really uses it that way.

Why are more cores generally better?
You can make them run faster, so not just 5 GHz but 6 GHz. Yes, the CPU will get hotter and drain more power, which is the same trade-off, but you can control the cores more effectively and throttle individual ones down. That’s one of the many reasons CPUs get more cores: you can throttle an individual core down without losing too much performance. But if your 4-core CPU gets hot and the system starts throttling one core down, you lose 25 percent of your performance. That’s a reason so many cheap laptops are super slow: they overheat too fast, and regulating that heat comes at a big cost since they have fewer cores.

I don’t know if I can make it any clearer, but what I was pointing out is that cores don’t really matter in terms of speed. The reason they make more cores is to fit more transistors, either by bumping up the size of the chip or by moving to a smaller-nanometer technology…

GPU cores and CPU cores are different. A CPU can do much more precise operations, while a GPU can’t; there are specific tasks a GPU just can’t do, some mathematical things. I don’t even know exactly what they are, but you can google it.

I would love to agree with you, but transistors don’t mean much when it comes to rendering. More transistors mean you can do more complicated things in one go, so faster. With rendering you don’t have complex calculations; it’s pretty basic physics, and so are the calculations. That’s also the reason a GPU can handle those things perfectly well.

Why not advertise the amount of transistors?

If transistors hypothetically made the difference, Intel and AMD would advertise the number of transistors instead of the number of cores and GHz. For things like AI and deep learning a large number of transistors can help a lot, since the CPU can store/process more 0s and 1s, which can reduce the number of CPU cycles needed for a calculation.

About your comparison image

Because of your image I see where you get your theory from. An instruction (whatever instruction) runs through a CPU cycle, stores values in the transistors if needed, and on to the next instruction. As a programmer you don’t need to care how many transistors get used; you optimize the number of cycles you need for a certain task. I don’t know the exact numbers, but if every ray of light needs 10 CPU cycles regardless of the number of transistors, then 10 CPU cycles are needed, and they all have to pass through the CPU. The number of transistors doesn’t change the number of cycles needed for renders, and a CPU can’t suddenly do more than one instruction at a time.

It’s not as if a new CPU with 16 cores suddenly has 1/4 of the transistors per core; no, it still has about the same number (or more, being a newer generation on a thinner production process) per core. And it’s not that rendering, or a lot of other work, needs all the transistors in a core. What you need is to split buckets across different cores so you can do things in parallel. So in your image, CPU 2 would be 4x as fast as CPU 1.

Really simply put: if you have a 2-core CPU with 40 billion transistors, you can still only render 2 buckets, since you can only use two cores in parallel, and every light beam of the render has to be processed by CPU cycles, 2 (the number of cores) at a time. A 16-core CPU with only half the transistors (20 billion) will still be 8x as fast, since its 16 cores can run 16 cycles in parallel (assuming the GHz are the same, of course). Transistors are just switches and only matter if an instruction needs a lot of them, for example to finish in 4 CPU cycles instead of 10. Rendering is pretty basic in that respect, so what you need is many cores, so you can render that many buckets at the same time.
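As a back-of-the-envelope sketch (my simplification, assuming a perfectly parallel render, the same architecture, and no memory bottlenecks):

```python
def relative_render_speed(cores, ghz):
    """Idealized throughput: scales with cores x clock, not with transistor count."""
    return cores * ghz

# the example from the text: 2 cores vs 16 cores at the same clock
print(relative_render_speed(16, 5.0) / relative_render_speed(2, 5.0))  # -> 8.0
```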

Bottleneck

The bottleneck for simple calculations is not the number of transistors but the speed at which billions of light beams can be processed, in parallel. So you split them over the cores, and the more GHz, the more CPU cycles per second.

That’s also why a consumer i9-13900 kicks a Xeon’s *ss at renders: the calculations are simple. Same with GPUs: for plain renders you are way faster with a gamer’s 4090 than with a terribly expensive Axxxx-series card. But if you want to do much more complicated calculations like deep learning or AI, the A-series card suddenly performs way better than its CUDA core count alone would suggest. It’s made for really complex calculations.

If you overclock for rendering you don’t want any cores throttling down. I have an i9-9900K, and by default, at 100% usage of all cores, only 2 run at 5 GHz. I can change that and let them all run at 5 GHz, which indeed uses more power but also gives me more render speed (and less stability). I’ve disabled the throttling, so if things go wrong it simply crashes KS or gives me a blue screen. I’ve never actually tested it with KS, but I think with all cores at 4.9 GHz it’s stable and works nicely. With that setting I would lose performance in Flight Simulator, though, since it only uses 2 cores / 4 threads, and instead of 5 GHz they would run at 4.9 GHz.

Small, smaller, smallest

The nm figure is not the chip size; it refers to how small the features of the process are. The smaller they get, the more transistors you can fit inside a core. That shrinking will end somewhere with current technology, since the power/heat/voltage make every step harder. That’s also why Moore’s law is a bit hard to keep up with these days.

V-Ray’s buckets

V-Ray’s adaptive bucket size is a pretty simple trick you can also watch in the viewport if you don’t turn on progressive rendering. If you have 12 cores but only 6 buckets to go, it halves the bucket size so every core can still work at 100%, and it keeps dividing buckets until everything is finished. So where older V-Ray versions only used a fraction of the available cores at the end, it now keeps them occupied until the very last bucket. This makes it perfectly clear that it’s the number of cores that makes the difference with renders.
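A rough sketch of that trick; the halving rule below is my simplified guess at the behavior described, not V-Ray’s actual algorithm:

```python
def split_bucket(bucket):
    """Halve a bucket along its longer side so two cores can share the work."""
    x, y, w, h = bucket
    if w >= h:
        return [(x, y, w // 2, h), (x + w // 2, y, w - w // 2, h)]
    return [(x, y, w, h // 2), (x, y + h // 2, w, h - h // 2)]

def keep_cores_busy(pending, idle_cores, min_size=8):
    """While idle cores outnumber remaining buckets, keep halving buckets."""
    while len(pending) < idle_cores and any(
            b[2] > min_size and b[3] > min_size for b in pending):
        biggest = max(pending, key=lambda b: b[2] * b[3])
        pending.remove(biggest)
        pending.extend(split_bucket(biggest))
    return pending

print(keep_cores_busy([(0, 0, 128, 128)], idle_cores=4))
# one 128x128 bucket becomes four 64x64 pieces, so all 4 cores stay busy
```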

Final words

Anyway, I hope I’ve made clear that the number of transistors mainly says something about the complexity of the instructions a core can handle in a single CPU cycle. You can’t divide CPU instructions over an arbitrary number of transistors, so more transistors don’t make things faster by themselves. You need multiple cores to do as many things in parallel as you can.

And no worries about your English; it’s totally fine. It’s not my first language either :)